Fuzzy matrices play a crucial role in fuzzy logic and fuzzy systems. This paper investigates the supervised learning of fuzzy matrices from sample pairs of input–output fuzzy vectors, where inference is performed by the max–min composition rule. We propose an optimization approach based on stochastic gradient descent (SGD) that minimizes a mean-squared-error objective subject to the constraint that the matrix elements take values in the interval [0, 1]. To address the non-smoothness of the max–min composition rule, a modified smoothing function for max–min is employed, ensuring stable optimization. Experimental results demonstrate that the proposed method converges and achieves high learning accuracy on multiple sets of randomly generated input–output vector samples.
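As a concrete illustration of the kind of learning loop the abstract describes, the sketch below trains a fuzzy matrix under the standard max–min composition, $y_i = \max_j \min(w_{ij}, x_j)$, using projected SGD on the mean squared error. The paper's modified smoothing function is not reproduced here; this sketch substitutes the common log-sum-exp softmax/softmin surrogate, and all function names, the temperature `beta`, and the toy data generation are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_forward(W, x, beta=20.0):
    """Smoothed max-min composition y_i ~= max_j min(W_ij, x_j).

    Uses log-sum-exp softmin/softmax as the smooth surrogate
    (an assumption; the paper's modified smoothing may differ).
    """
    a = -beta * W                 # shape (n, m)
    b = -beta * x[None, :]        # shape (1, m), broadcast over rows
    s = np.logaddexp(a, b)        # log(e^{-beta*W} + e^{-beta*x})
    m = -s / beta                 # softmin(W_ij, x_j)
    q = np.exp(a - s)             # d softmin / d W_ij (softmin weight on W)
    z = beta * m
    lse = np.log(np.sum(np.exp(z), axis=1, keepdims=True))
    y = lse[:, 0] / beta          # smooth row-wise max over j
    p = np.exp(z - lse)           # d softmax / d m_ij (softmax weights)
    return y, p, q

def sgd_step(W, x, t, lr=0.5, beta=20.0):
    """One projected-SGD step on the mean squared error."""
    y, p, q = smooth_forward(W, x, beta)
    r = y - t
    grad = (2.0 / y.size) * r[:, None] * p * q   # chain rule: dL/dW_ij
    # Projection enforces the [0, 1] constraint on matrix elements.
    return np.clip(W - lr * grad, 0.0, 1.0), float(np.mean(r ** 2))

# Toy run: targets generated by exact max-min with a hidden matrix.
n = 5
W_true = rng.random((n, n))
W = rng.random((n, n))
for step in range(3000):
    x = rng.random(n)
    t = np.max(np.minimum(W_true, x[None, :]), axis=1)  # exact composition
    W, loss = sgd_step(W, x, t)
print(f"final sample MSE: {loss:.6f}")
```

In this sketch the `np.clip` projection plays the role of the interval constraint described above, and the temperature `beta` trades off smoothness of the surrogate against fidelity to the hard max–min rule: larger values track max–min more closely but make gradients sharper.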